Long-run dynamics of the U.S. patent classification system
Almost by definition, radical innovations create a need to revise existing
classification systems. In this paper, we argue that classification system
changes and patent reclassification are common and reveal interesting
information about technological evolution. To support our argument, we present
three sets of findings regarding classification volatility in the U.S. patent
classification system. First, we study the evolution of the number of distinct
classes. Reconstructed time series based on the current classification scheme
are very different from historical data. This suggests that using the current
classification to analyze the past produces a distorted view of the evolution
of the system. Second, we study the relative sizes of classes. The size
distribution is exponential, so classes are of quite different sizes, but the
largest classes are not necessarily the oldest. To explain this pattern with a
simple stochastic growth model, we introduce the assumption that classes have a
regular chance to be split. Third, we study reclassification. The share of
patents that are in a different class now than they were at birth can be quite
high. Reclassification mostly occurs across classes belonging to the same
1-digit NBER category, but not always. We also document that reclassified
patents tend to be more cited than non-reclassified ones, even after
controlling for grant year and class of origin.
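The splitting assumption lends itself to a toy simulation. The sketch below is a minimal illustration only: the preferential-attachment growth rule, the halving split rule, and the parameter values are assumptions made for demonstration, not the paper's model or calibration.

```python
import random

def simulate_classes(n_patents=10000, p_split=0.001, seed=42):
    """Toy model: each new patent joins an existing class with probability
    proportional to class size (preferential attachment); after each
    arrival, every class with more than one patent may split in two with
    a small probability."""
    rng = random.Random(seed)
    sizes = [1]                      # one class holding the first patent
    for _ in range(n_patents - 1):
        # preferential attachment: pick a class weighted by its size
        i = rng.choices(range(len(sizes)), weights=sizes, k=1)[0]
        sizes[i] += 1
        # each existing class independently splits with small probability
        for j in range(len(sizes)):
            if sizes[j] > 1 and rng.random() < p_split:
                half = sizes[j] // 2
                sizes[j] -= half     # splitting conserves total size
                sizes.append(half)
    return sizes
```

Because splitting keeps breaking up incumbent classes, the oldest classes need not remain the largest, consistent with the pattern described above.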
How predictable is technological progress?
Recently it has become clear that many technologies follow a generalized
version of Moore's law, i.e. costs tend to drop exponentially, at different
rates that depend on the technology. Here we formulate Moore's law as a
correlated geometric random walk with drift, and apply it to historical data on
53 technologies. We derive a closed form expression approximating the
distribution of forecast errors as a function of time. Based on hindcasting
experiments, we show that this approximation works well, making it possible to collapse the
forecast errors for many different technologies at different time horizons onto
the same universal distribution. This is valuable because it allows us to make
forecasts for any given technology with a clear understanding of the quality of
the forecasts. As a practical demonstration we make distributional forecasts at
different time horizons for solar photovoltaic modules, and show how our method
can be used to estimate the probability that a given technology will outperform
another technology at a given point in the future.
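A stripped-down version of this forecasting setup can be sketched as follows. Two assumptions to note: the noise is treated as IID rather than correlated, and the error-variance expression is the textbook random-walk-with-drift approximation, not the paper's closed-form result.

```python
import math

def forecast_log_cost(log_costs, horizon):
    """Fit a random walk with drift to a log-cost series; return the
    point forecast and an approximate forecast-error standard deviation
    at the given horizon (IID-noise simplification)."""
    diffs = [b - a for a, b in zip(log_costs, log_costs[1:])]
    T = len(diffs)
    mu = sum(diffs) / T                                 # estimated drift
    var = sum((d - mu) ** 2 for d in diffs) / (T - 1)   # noise variance
    point = log_costs[-1] + mu * horizon
    # horizon steps of future noise, plus error from estimating the drift
    err_var = var * (horizon + horizon ** 2 / T)
    return point, math.sqrt(err_var)
```

With a forecast distribution like this in hand for each technology, the probability that one technology's cost falls below another's at a given horizon follows from comparing the two forecast distributions.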
Can stimulating demand drive costs down? World War II as a natural experiment
For many products, increases in cumulative production are associated with decreasing unit costs. However, a serious problem of reverse causality (lower prices leading to increasing demand) makes it difficult to use this relationship for policy. We study World War II, during which the demand for military products was largely exogenous, and the correlation between production, cumulative production and an exogenous time trend was limited. Our results indicate that decreases in cost can be attributed roughly equally to the growth of experience and to an exogenous time trend.
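The decomposition described above amounts to regressing log unit cost on log cumulative production (the experience term) and calendar time (the exogenous trend). Below is a minimal sketch on synthetic data; the coefficients and the tiny OLS helper are illustrative assumptions, not the paper's estimates or code.

```python
import math

def ols(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination (fine for a handful of regressors)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic series: log cost falls with log cumulative production
# (experience) and with calendar time (exogenous trend). The "true"
# coefficients (-0.3 and -0.05) are made up for this illustration.
t_vals = list(range(1, 21))
log_Z = [math.log(10 * t) for t in t_vals]    # log cumulative production
log_c = [2.0 - 0.3 * z - 0.05 * t for z, t in zip(log_Z, t_vals)]
X = [[1.0, z, float(t)] for z, t in zip(log_Z, t_vals)]
beta = ols(X, log_c)   # [intercept, experience effect, time effect]
```

Because cumulative production and time are correlated in practice, separating the two effects hinges on periods, like WWII, where that correlation is limited.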
Long-run dynamics of the US patent classification system
The dataset contains several files used in the paper "Long-run dynamics of the US patent classification system" by François Lafond and Daniel Kim. The files contain data on the evolution of the number of classes in the United States Patent Classification System between 1836 and 2015, data on the original (at grant date) and current (2015) primary classification at the class level of patents granted between 1976 and 2015, and related supporting files.
Why is productivity slowing down?
We review recent research on the slowdown of labor productivity and examine the contribution of different explanations to this decline. Comparing the post-2005 period with the preceding decade for five advanced economies, we seek to explain a slowdown of 0.8 to 1.8 percentage points. We trace most of this to lower contributions of TFP and capital deepening, with manufacturing accounting for the biggest sectoral share of the slowdown. No single explanation accounts for the slowdown, but we identify a combination of factors which, taken together, accounts for much of what has been observed. In the countries we study, these are mismeasurement, a decline in the contribution of capital per worker, lower spillovers from the growth of intangible capital, the slowdown in trade, and lower growth of allocative efficiency. Sectoral reallocation and a lower contribution of human capital may also have played a role in some countries. In addition to our quantitative assessment of explanations for the slowdown, we qualitatively assess other explanations, including whether productivity growth may be declining because innovation itself is slowing down.
Reconstructing production networks using machine learning
The vulnerability of supply chains and their role in the propagation of shocks have been highlighted multiple times in recent years, including by the recent pandemic. However, while the importance of micro data is increasingly recognised, data at the firm-to-firm level remain scarce. In this study, we formulate supply chain network reconstruction as a link prediction problem and tackle it using machine learning, specifically Gradient Boosting. We test our approach on three different supply chain datasets and show that it works very well, outperforming three benchmarks. An analysis of feature importance suggests that the key data underlying our predictions are firms' industry, location, and size. To evaluate the feasibility of reconstructing a network when no production network data are available, we attempt to predict a dataset using a model trained on another dataset, showing that the model's performance, while still better than a random predictor, deteriorates substantially.
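The link-prediction framing can be sketched with a handful of synthetic firms. Everything below is an illustrative assumption — the firm attributes, the same-industry link rule, and the pair features are made up for this sketch; only the overall approach (pair-level features fed to scikit-learn's GradientBoostingClassifier) mirrors the setup described above.

```python
import itertools
import random

from sklearn.ensemble import GradientBoostingClassifier

rng = random.Random(0)
# Hypothetical firm attributes: (industry code, log size). A real model
# would also use location and richer features, per the abstract.
firms = [(rng.randrange(3), rng.uniform(0.0, 5.0)) for _ in range(60)]

# Synthetic "ground truth" network: links are far more likely between
# firms in the same industry (an assumption made for this sketch).
pair_features, labels = [], []
for i, j in itertools.combinations(range(len(firms)), 2):
    same_industry = firms[i][0] == firms[j][0]
    linked = rng.random() < (0.6 if same_industry else 0.05)
    pair_features.append([float(same_industry), firms[i][1], firms[j][1]])
    labels.append(int(linked))

# Treat reconstruction as supervised link prediction over firm pairs
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(pair_features, labels)
link_scores = clf.predict_proba(pair_features)[:, 1]  # rank candidate links
```

These in-sample scores only show the mechanics; the cross-dataset experiment in the abstract corresponds to training the classifier on one network and scoring firm pairs from another.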